How could we improve this question?
Question: Do you support or oppose restoring the Joint Comprehensive Plan of Action (JCPOA) with Iran? (answers range from "strongly support" to "strongly oppose")
Are we spending too much money on welfare and national defense, or not enough? (answers range from "too little" to "too much")
Have you ever joined a political protest? (answers are "I have done this", "I might do this", or "I would never do this")
How often do you drink alcohol? (answers range from "Never" to "Frequently")
In the last six months, have you been a victim of a crime? (answers are "yes" or "no")
Do you abuse illegal drugs? (answers are "yes" or "no")
Do you blame Democrats for the government shutdown? (answers are "Yes", "No", or "Not sure")
There are no hard and fast rules here; sometimes a flawed question is still the best possible option. Still, there are some general best practices.
As many as half of non-voters will claim in surveys to have voted.
Hanmer, Banks, and White (2014) randomly gave respondents subtle and unsubtle hints that researchers might check voter rolls.
List experiments are another option for dealing with sensitive topics: they measure preferences indirectly.
Respondents are randomly assigned to receive either only non-sensitive items, or the non-sensitive items plus one sensitive item.
Respondents report only how many items they agree with, not which ones.
The difference in the average count between the two groups estimates support for the sensitive item.
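The difference-in-means estimator above can be sketched with simulated data. Everything here is a made-up illustration: the three non-sensitive items, the 50% endorsement rate, and the 30% true support rate are assumptions, not values from the text.

```python
import random

random.seed(42)

# Hypothetical setup: 3 non-sensitive items, each endorsed with probability
# 0.5; true support for the sensitive item is 30% (assumed for illustration).
N = 10_000

# Control group: count of non-sensitive items endorsed (0-3).
control = [sum(random.random() < 0.5 for _ in range(3)) for _ in range(N)]

# Treatment group: same items, plus the sensitive item (endorsed w.p. 0.3).
treatment = [
    sum(random.random() < 0.5 for _ in range(3)) + (random.random() < 0.3)
    for _ in range(N)
]

# Difference in mean counts estimates support for the sensitive item.
estimate = sum(treatment) / N - sum(control) / N
print(f"estimated support: {estimate:.3f}")  # should land near 0.30
```

Because no individual ever reveals which items they endorsed, the design protects respondents while still yielding a group-level estimate.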
Some respondents may always pick the “neutral” option, if one is available.
One option is to eliminate the neutral option altogether, but there’s a trade-off here: some people genuinely don’t have an opinion!
Balanced agree/disagree items: agreement can indicate more of a characteristic in some questions and less of it in others (e.g., “you can’t trust strangers” vs. “most people are honest”).
Forced choice: instead of agree/disagree, two potentially desirable options could be posed against each other (“is it better to be obedient or creative?”)
Respondents may get worn out, bored, or not really care to give accurate answers.
(This can be especially problematic for surveys that offer rewards to respondents.)
Complex concepts often require more than one item to measure appropriately.
Using multiple items can reduce measurement error, and it also gives us some options for validating measurements.
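One common internal-consistency check for a multi-item measure (my addition; the text does not name a specific method) is Cronbach's alpha. A minimal pure-Python sketch, with made-up scores:

```python
def cronbachs_alpha(items):
    """Cronbach's alpha for a scale; `items` is a list of items,
    each a list of respondent scores (all the same length)."""
    k = len(items)      # number of items in the scale
    n = len(items[0])   # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(map(sample_var, items)) / sample_var(totals))

# Made-up example: four respondents answering three related items.
scale = [
    [1, 2, 4, 5],
    [2, 2, 4, 4],
    [1, 3, 5, 5],
]
print(round(cronbachs_alpha(scale), 2))  # → 0.96
```

Alpha near 1 means the items move together; by convention, values above roughly 0.7 are treated as acceptable reliability.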
Basic problem: professions of “old-fashioned” racism declined precipitously after the Civil Rights movement.
But most Americans continued to oppose programs designed to improve racial equality.
Symbolic racism (or racial resentment, or modern racism) is a proposed explanation for this phenomenon.
Described as a blend of “anti-black affect” and “traditional conservatism” with four components:
Scale: Symbolic Racism 2000 scale
Some major critiques:
Questions are tautological
The questions mostly just measure conservatism.
The construct isn’t really distinct from old-fashioned racism.
Social Desirability
Everyone wants to be liked, so respondents will give answers that make them look good, perhaps even subconsciously.